Structured outputs evaluation blog with dottxt - release early next week #2021
Conversation
Very cool and insightful read 💛 I suspect this type of constraining can be very useful in a multitude of products!
I've added a few nits along the way 🤗
evaluation-structured-outputs.md
Outdated
We can see in the regex that we allow the model to reason for anywhere from 200 to 700 characters; it must then declare that “The answer is” and reply with a number of up to 10 digits (that cannot start with 0).
It’s worth mentioning that the regex controlling the structure is similar, but not identical, to the regex used to parse out the answer. We’ve learned there’s an interesting bit of nuance in defining the structure since, like the prompt, it can impact performance. For example, notice the `{200,700}` in the regex. This means that the model has 200 to 700 characters to “reason” before answering. Changing these values can impact performance and can lead to something we refer to as “thought control”, an area we’re hoping to write more about soon.
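For readers following along, the described structure can be sketched as a plain Python regex. The exact pattern lives in the blog post; the one below is a hypothetical reconstruction based only on the description above (200–700 reasoning characters, the literal phrase “The answer is”, then a number of up to 10 digits not starting with 0):

```python
import re

# Hypothetical reconstruction of the structured-output regex described
# in the post (not the exact pattern used there):
#   - 200 to 700 characters of free-form "reasoning"
#   - the literal phrase "The answer is "
#   - a 1-10 digit number that cannot start with 0, optional period
PATTERN = re.compile(r"[\s\S]{200,700}The answer is ([1-9][0-9]{0,9})\.?")

reasoning = "Let us think step by step. " * 10  # 270 characters of filler
text = reasoning + "The answer is 42."

match = PATTERN.fullmatch(text)
print(match.group(1))  # prints "42"
```

Note that the greedy `[\s\S]{200,700}` span backtracks until the literal “The answer is” lines up, which is why a separate (slightly different) regex is still handy for parsing the answer back out.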
Question here: can't the model still generate "the answer is" in those 200-700 CoT characters?
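This is easy to check empirically. Using an illustrative regex of the form described in the diff (a hypothetical reconstruction, not the post's exact pattern): yes, nothing stops “The answer is” from appearing inside the reasoning span, but greedy matching still anchors the parse on the final occurrence:

```python
import re

# Illustrative pattern of the form described in the post (assumed, not exact):
pattern = re.compile(r"[\s\S]{200,700}The answer is ([1-9][0-9]{0,9})\.?")

# The reasoning filler itself contains "The answer is" several times.
filler = "Maybe The answer is 7, or maybe not; let me keep thinking. " * 4
text = filler + "The answer is 42."

# The unconstrained [\s\S]{200,700} span happily swallows the earlier
# occurrences; the capture group binds to the last one.
match = pattern.fullmatch(text)
print(match.group(1))  # prints "42"
```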
Co-authored-by: Joao Gante <[email protected]>
Very cool! Noob question: do all models lend themselves equally well to structured generation, or are there limitations to the technique?
Co-authored-by: Pedro Cuenca <[email protected]>
@pcuenca we don't know yet - this blog is a bunch of preliminary experiments. For now it would seem that the technique works well across models, but that still needs to be confirmed :)
Thank you all for your comments and useful feedback :)
Still need to port all the images, but this is ready for a light review.